
    A complement to Le Cam's theorem

    This paper examines asymptotic equivalence in the sense of Le Cam between density estimation experiments and the accompanying Poisson experiments. The significance of asymptotic equivalence is that all asymptotically optimal statistical procedures can be carried over from one experiment to the other. The equivalence given here is established under a weak assumption on the parameter space $\mathcal{F}$. In particular, a sharp Besov smoothness condition on $\mathcal{F}$ is given which is sufficient for Poissonization, namely, that $\mathcal{F}$ is contained in a Besov ball $B_{p,q}^{\alpha}(M)$ with $\alpha p > 1/2$. Examples show that Poissonization is not possible whenever $\alpha p < 1/2$. In addition, asymptotic equivalence of the density estimation model and the accompanying Poisson experiment is established for all compact subsets of $C([0,1]^m)$, a condition which includes all Hölder balls with smoothness $\alpha > 0$. Comment: Published at http://dx.doi.org/10.1214/009053607000000091 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
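
    As a concrete illustration of the two experiments being compared, the sketch below (a minimal Python simulation, with the uniform density standing in for an arbitrary $f$) draws either exactly $n$ observations from $f$, or a Poisson($n$) number of them, which is the accompanying Poisson experiment with intensity $n f$.

        import numpy as np

        rng = np.random.default_rng(0)

        def density_experiment(sample_f, n):
            # Standard experiment: exactly n i.i.d. draws from the density f.
            return sample_f(n)

        def poisson_experiment(sample_f, n):
            # Accompanying Poisson experiment: a Poisson(n) number of i.i.d.
            # draws from f, i.e. a Poisson process with intensity n * f.
            return sample_f(rng.poisson(n))

        # Illustrative choice: f = Uniform[0, 1]; any sampler for f works the same way.
        sample_f = lambda size: rng.uniform(0.0, 1.0, size)
        x_fixed = density_experiment(sample_f, 1000)   # exactly 1000 points
        x_pois = poisson_experiment(sample_f, 1000)    # about 1000 points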

    Asymptotic equivalence and adaptive estimation for robust nonparametric regression

    The asymptotic equivalence theory developed in the literature so far covers only bounded loss functions. This limits the potential applications of the theory because many commonly used loss functions in statistical inference are unbounded. In this paper we develop asymptotic equivalence results for robust nonparametric regression with unbounded loss functions. The results imply that all Gaussian nonparametric regression procedures can be robustified in a unified way. A key step in our equivalence argument is to bin the data and then take the median of each bin. The asymptotic equivalence results have significant practical implications. To illustrate the general principles of the equivalence argument we consider two important nonparametric inference problems: robust estimation of the regression function and estimation of a quadratic functional. In both cases easily implementable procedures are constructed and shown to enjoy simultaneously a high degree of robustness and adaptivity. Other problems, such as the construction of confidence sets and nonparametric hypothesis testing, can be handled in a similar fashion. Comment: Published at http://dx.doi.org/10.1214/08-AOS681 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
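
    The bin-and-take-medians step is concrete enough to sketch. Below is a minimal Python version (the bin count and the handling of empty bins are illustrative choices, not the paper's prescriptions); the resulting bin medians behave approximately like Gaussian observations, to which any Gaussian nonparametric regression procedure can then be applied.

        import numpy as np

        def bin_medians(x, y, num_bins):
            # Partition the range of x into equal-width bins and record the
            # median response in each bin; the medians can then be fed into
            # any Gaussian nonparametric regression procedure.
            edges = np.linspace(x.min(), x.max(), num_bins + 1)
            idx = np.clip(np.digitize(x, edges) - 1, 0, num_bins - 1)
            centers = 0.5 * (edges[:-1] + edges[1:])
            medians = np.array([np.median(y[idx == j]) if np.any(idx == j)
                                else np.nan for j in range(num_bins)])
            return centers, medians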

    Sparse CCA: Adaptive Estimation and Computational Barriers

    Canonical correlation analysis is a classical technique for exploring the relationship between two sets of variables. It has important applications in analyzing high-dimensional datasets originating from genomics, imaging and other fields. This paper considers adaptive minimax and computationally tractable estimation of leading sparse canonical coefficient vectors in high dimensions. First, we establish separate minimax estimation rates for the canonical coefficient vectors of each set of random variables under no structural assumption on the marginal covariance matrices. Second, we propose a computationally feasible estimator that attains the optimal rates adaptively under an additional sample size condition. Finally, we show that a sample size condition of this kind is needed for any randomized polynomial-time estimator to be consistent, assuming hardness of certain instances of the Planted Clique detection problem. The result is faithful to the Gaussian models used in the paper. As a byproduct, we obtain the first computational lower bounds for sparse PCA under the Gaussian single-spiked covariance model.
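
    For orientation, classical CCA reduces to an SVD of the whitened cross-covariance; the Python sketch below computes the leading canonical coefficient pair this way and then hard-thresholds small entries. The thresholding step is only a caricature of sparsity for illustration; it is not the adaptive estimator proposed in the paper.

        import numpy as np

        def inv_sqrt(S):
            # Symmetric inverse square root via an eigendecomposition;
            # assumes S is positive definite.
            w, V = np.linalg.eigh(S)
            return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

        def leading_cca_pair(X, Y, threshold=0.0):
            # Leading canonical coefficient vectors for X (n x p), Y (n x q).
            p = X.shape[1]
            S = np.cov(X, Y, rowvar=False)             # joint (p+q) covariance
            Sx, Sy, Sxy = S[:p, :p], S[p:, p:], S[:p, p:]
            Wx, Wy = inv_sqrt(Sx), inv_sqrt(Sy)
            U, _, Vt = np.linalg.svd(Wx @ Sxy @ Wy)    # whitened cross-covariance
            a, b = Wx @ U[:, 0], Wy @ Vt[0, :]
            a[np.abs(a) < threshold] = 0.0             # naive sparsification,
            b[np.abs(b) < threshold] = 0.0             # not the paper's method
            return a, b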

    Minimax estimation with thresholding and its application to wavelet analysis

    Many statistical practices involve choosing between a full model and reduced models in which some coefficients are set to zero. In practice, the data are used to select a model and to estimate its coefficients. Is it possible to do so and still come up with an estimator that is always better than the traditional estimator based on the full model? The James-Stein estimator is such an estimator; it has a property called minimaxity. However, it considers only one reduced model, namely the origin, and hence reduces either no coefficient estimator to zero or every coefficient estimator to zero. In many applications, including wavelet analysis, it is more desirable to reduce to zero only those estimators smaller than a threshold, an operation called thresholding in this paper. Is it possible to construct estimators of this kind that are minimax? In this paper we construct such minimax estimators that perform thresholding. We apply our recommended estimator to wavelet analysis and show that it performs best among the well-known estimators aiming simultaneously at estimation and model selection. Some of our estimators are also shown to be asymptotically optimal. Comment: Published at http://dx.doi.org/10.1214/009053604000000977 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
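
    To make the contrast concrete, here is a minimal Python sketch, assuming the canonical setting of a d-dimensional Gaussian mean observed with identity covariance: the positive-part James-Stein estimator shrinks the whole vector toward the origin without zeroing any single coordinate, while a thresholding rule zeros exactly the coordinates below a cutoff. The plain hard-thresholding rule shown is for illustration only; it is not, by itself, the minimax construction of the paper.

        import numpy as np

        def james_stein(z):
            # Positive-part James-Stein estimator for a N(theta, I_d) mean,
            # d >= 3: shrinks toward the origin, never zeroing one coordinate.
            d = z.size
            shrink = max(0.0, 1.0 - (d - 2) / np.sum(z ** 2))
            return shrink * z

        def hard_threshold(z, t):
            # Thresholding: keep a coordinate only if it exceeds t in absolute
            # value, as when zeroing small wavelet coefficient estimates.
            return np.where(np.abs(z) > t, z, 0.0)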